Unrolled neural networks have recently achieved state-of-the-art accelerated MRI reconstruction. These networks unroll an iterative optimization algorithm by alternating between physics-based consistency and neural-network-based regularization. However, they require several iterations of a large neural network to handle high-dimensional imaging tasks such as 3D MRI. This limits traditional training algorithms based on backpropagation due to the large memory and compute requirements of calculating gradients and storing intermediate activations. To address this challenge, we propose Greedy LEarning for Accelerated MRI (GLEAM) reconstruction, an efficient training strategy for high-dimensional imaging settings. GLEAM splits the end-to-end network into decoupled network modules. Each module is optimized in a greedy manner with decoupled gradient updates, reducing the memory footprint during training. We show that the decoupled gradient updates can be performed in parallel on multiple graphics processing units (GPUs) to further reduce training time. We present experiments on 2D and 3D datasets, including multi-coil knee, brain, and dynamic cardiac cine MRI. We observe that: i) GLEAM generalizes as well as state-of-the-art memory-efficient baselines such as gradient checkpointing and invertible networks with the same memory footprint, but trains 1.3x faster; ii) for the same memory footprint, GLEAM yields a 1.1 dB PSNR gain in 2D and 1.8 dB in 3D over end-to-end baselines.
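The decoupled, greedy module-wise updates can be illustrated with a minimal PyTorch-style sketch (not the authors' implementation; the modules, local loss, and data shapes below are placeholders, and the parallel multi-GPU execution described above is omitted):

```python
import torch

# Placeholder modules standing in for the unrolled network split into K parts.
modules = [torch.nn.Sequential(torch.nn.Conv2d(2, 2, 3, padding=1),
                               torch.nn.ReLU()) for _ in range(4)]
optimizers = [torch.optim.Adam(m.parameters(), lr=1e-4) for m in modules]

def greedy_train_step(x, target, criterion=torch.nn.MSELoss()):
    """One greedy training step: each module is updated with its own local loss."""
    for module, opt in zip(modules, optimizers):
        out = module(x)                  # forward through one module only
        loss = criterion(out, target)    # local (greedy) objective
        opt.zero_grad()
        loss.backward()                  # gradients stay within this module
        opt.step()
        x = out.detach()                 # detach: no gradient flows to earlier modules
    return x
```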
We develop fast algorithms and reliable software for convex optimization of two-layer neural networks with ReLU activation functions. Our work leverages a convex reformulation of the standard weight-decay penalized training problem as a set of group-$\ell_1$-regularized data-local models, where locality is enforced by polyhedral cone constraints. In the special case of zero regularization, we show that this problem is exactly equivalent to unconstrained optimization of a convex "gated ReLU" network. For problems with non-zero regularization, we show that convex gated ReLU models obtain data-dependent approximation bounds for the ReLU training problem. To optimize the convex reformulations, we develop an accelerated proximal gradient method and a practical augmented Lagrangian solver. We show that these approaches are faster than standard training heuristics for the non-convex problem, such as SGD, and outperform commercial interior-point solvers. Experimentally, we verify our theoretical results, explore the group-$\ell_1$ regularization path, and scale convex optimization for neural networks to image classification on MNIST and CIFAR-10.
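For intuition, a group-$\ell_1$ penalty of the kind used in the convex reformulation can be optimized by proximal gradient with block soft-thresholding as the prox; the NumPy sketch below solves a generic group-lasso problem and omits the paper's polyhedral cone constraints and data-local structure:

```python
import numpy as np

def block_soft_threshold(w, groups, tau):
    """Prox of tau * sum_g ||w_g||_2 (group-l1): shrink each group toward zero."""
    out = w.copy()
    for g in groups:
        norm_g = np.linalg.norm(w[g])
        out[g] = 0.0 if norm_g <= tau else (1.0 - tau / norm_g) * w[g]
    return out

def group_lasso_pgd(X, y, groups, beta, n_iters=500):
    """Proximal gradient for 0.5 * ||X w - y||^2 + beta * sum_g ||w_g||_2.
    A generic group-lasso stand-in for the paper's data-local convex models."""
    w = np.zeros(X.shape[1])
    step = 1.0 / (np.linalg.norm(X, 2) ** 2)  # 1 / Lipschitz constant of the smooth part
    for _ in range(n_iters):
        grad = X.T @ (X @ w - y)
        w = block_soft_threshold(w - step * grad, groups, step * beta)
    return w

# Hypothetical usage with two feature groups:
# X = np.random.randn(100, 6); y = np.random.randn(100)
# w = group_lasso_pgd(X, y, groups=[[0, 1, 2], [3, 4, 5]], beta=0.1)
```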
We describe the convex semi-infinite dual of the two-layer vector-output ReLU neural network training problem. This semi-infinite dual admits a finite-dimensional representation, but its support is over a convex set that is difficult to characterize. In particular, we demonstrate that the non-convex neural network training problem is equivalent to a finite-dimensional convex copositive program. Our work is the first to identify this strong connection between the global optima of neural networks and those of copositive programs. We thus demonstrate how neural networks implicitly attempt to solve copositive programs via semi-nonnegative matrix factorization, and draw key insights from this formulation. We describe the first algorithms for provably finding the global optimum of the vector-output neural network training problem, which are polynomial in the number of samples for a fixed data rank, yet exponential in the dimension. However, in the case of convolutional architectures, the computational complexity is exponential only in the filter size and polynomial in all other parameters. We describe the circumstances in which we can find the global optimum of this neural network training problem exactly with a soft-thresholded SVD, and provide a copositive relaxation which is guaranteed to be exact for certain classes of problems, and which corresponds with the solution of stochastic gradient descent in practice.
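The soft-thresholded SVD referenced above is a standard operation (the proximal operator of the nuclear norm); the NumPy sketch below shows it purely as a building block, not as the paper's full algorithm:

```python
import numpy as np

def soft_thresholded_svd(Z, tau):
    """Soft-threshold the singular values of Z by tau (prox of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # shrink singular values toward zero
    return (U * s_shrunk) @ Vt
```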
We investigate ensemble methods for prediction in an online setting. Unlike the existing literature on ensembling, for the first time, we introduce a new approach in which a meta learner effectively combines the base model predictions using a superset of the features, namely the union of the base models' feature vectors, rather than the predictions themselves. Here, our model does not use the predictions of the base models as inputs to a machine learning algorithm, but chooses the best possible combination at each time step based on the state of the problem. We explore three different constraint spaces for ensembling the base learners via linear combinations of the base predictions: convex combinations, where the components of the ensembling vector are all nonnegative and sum to 1; affine combinations, where the weight vector components are required to sum to 1; and unconstrained combinations, where the components are free to take any real value. The constraints are both analyzed theoretically under known statistics and integrated into the learning procedure of the meta learner as part of the optimization in an automated manner. To show the practical efficiency of the proposed method, we employ a gradient-boosted decision tree and a multi-layer perceptron separately as the meta learners. Our framework is generic, so other machine learning architectures can be used as the ensembler as long as they allow a custom differentiable loss for minimization. We demonstrate the learning behavior of our algorithm on synthetic data and show significant performance improvements over conventional methods on various real-life datasets extensively used in well-known data competitions. Furthermore, we openly share the source code of the proposed method to facilitate further research and comparison.
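The three constraint spaces on the ensembling weights can be made concrete with the Euclidean projections below (a minimal NumPy sketch; the meta learner and its differentiable loss are not shown):

```python
import numpy as np

def project_unconstrained(w):
    """Unconstrained combination: any real-valued weight vector is feasible."""
    return w

def project_affine(w):
    """Affine combination: components must sum to 1 (no sign constraint)."""
    return w + (1.0 - w.sum()) / w.size

def project_convex(w):
    """Convex combination: nonnegative components summing to 1
    (Euclidean projection onto the probability simplex)."""
    u = np.sort(w)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, w.size + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(w - theta, 0.0)

# Hypothetical linear ensemble of three base predictions p with convex weights:
# p = np.array([0.2, 0.7, 0.4]); w = project_convex(np.array([0.5, 1.2, -0.3]))
# y_hat = w @ p
```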
The goal of this work is to localize sound sources in visual scenes with a self-supervised approach. Contrastive learning in the context of sound source localization leverages the natural correspondence between audio and visual signals where the audio-visual pairs from the same source are assumed as positive, while randomly selected pairs are negatives. However, this approach brings in noisy correspondences; for example, positive audio and visual pair signals that may be unrelated to each other, or negative pairs that may contain semantically similar samples to the positive one. Our key contribution in this work is to show that using a less strict decision boundary in contrastive learning can alleviate the effect of noisy correspondences in sound source localization. We propose a simple yet effective approach by slightly modifying the contrastive loss with a negative margin. Extensive experimental results show that our approach gives on-par or better performance than the state-of-the-art methods. Furthermore, we demonstrate that the introduction of a negative margin to existing methods results in a consistent improvement in performance.
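One way to realize the negative-margin idea is to shift the positive-pair similarity inside an InfoNCE-style loss; the PyTorch sketch below is illustrative, assumes one positive pair per anchor, and may differ from the paper's exact objective:

```python
import torch
import torch.nn.functional as F

def contrastive_loss_with_margin(audio_emb, visual_emb, margin=-0.1, tau=0.07):
    """InfoNCE-style audio-visual loss with a margin on the positive similarity.
    A negative `margin` boosts the positive pair, relaxing the decision boundary.
    Illustrative sketch only; hyperparameters are placeholders."""
    a = F.normalize(audio_emb, dim=-1)
    v = F.normalize(visual_emb, dim=-1)
    sim = a @ v.t() / tau                                   # scaled cosine similarities
    # Shift the diagonal (positive pairs) by the margin before the softmax.
    sim = sim - torch.eye(sim.size(0), device=sim.device) * (margin / tau)
    labels = torch.arange(sim.size(0), device=sim.device)   # matched pairs on the diagonal
    return F.cross_entropy(sim, labels)
```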
To maintain standards in medical imaging studies, images should have the image quality necessary for potential diagnostic use. Although CNN-based approaches have been used to assess image quality, their performance can still be improved in terms of accuracy. In this work, we address this problem by using a Swin Transformer, which improves the classification performance for poor-quality images that cause degradation in medical image quality. We test our method on an object classification problem on chest X-rays (Object-CXR) and a left ventricular outflow tract (LVOT) classification problem on cardiac MRI. We obtain classification accuracies of 87.1% and 95.48% on the Object-CXR and LVOT datasets, respectively. Our experimental results show that the use of the Swin Transformer improves the Object-CXR classification performance while achieving comparable performance on the LVOT dataset. To the best of our knowledge, our study is the first vision transformer application for medical image quality assessment.
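A minimal sketch of pairing a Swin Transformer backbone with a two-class quality head, assuming the timm library; the checkpoint name and training details are illustrative, not the authors' configuration:

```python
import timm
import torch

# Swin Transformer backbone with a 2-class head (e.g., acceptable vs. degraded quality).
model = timm.create_model("swin_tiny_patch4_window7_224",
                          pretrained=True, num_classes=2)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

def training_step(images, labels):
    """images: (B, 3, 224, 224) tensor; labels: (B,) class indices."""
    logits = model(images)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```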
Deep language models have achieved remarkable success in the NLP domain. The standard way to train a deep language model is to employ unsupervised learning on a large unlabeled corpus. However, such large corpora are only available for widely-adopted, high-resource languages and domains. This study presents DPRK-BERT, the first deep language model for the DPRK Korean language. We achieve this by compiling the first unlabeled corpus for the DPRK language and fine-tuning a pre-existing ROK language model. We compare the proposed model with existing approaches and show significant improvements on two DPRK datasets. We also present a cross-lingual version of this model, which yields better generalization across the two Korean languages. Finally, we provide various NLP tools related to the DPRK language that will foster future research.
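The fine-tuning step (adapting an existing ROK language model to the DPRK corpus with masked language modeling) follows the standard Hugging Face recipe; in the sketch below the checkpoint name and corpus path are placeholders rather than the authors' actual artifacts:

```python
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

# Placeholder checkpoint: any pre-existing ROK (South Korean) BERT-style model.
checkpoint = "some-rok-bert-checkpoint"  # hypothetical name
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# Unlabeled DPRK corpus as plain text, one document per line (hypothetical path).
corpus = load_dataset("text", data_files={"train": "dprk_corpus.txt"})
tokenized = corpus.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dprk-bert", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```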